Current Issue: January–March · Volume: 2023 · Issue: 1 · Articles: 5
Soft robotic modules have potential therapeutic and educational uses. To serve individuals' differing preferences and personalities, they must be safe, soft, smart, and customizable. A safe modular robotic product made of soft materials, particularly silicone, programmed with artificial intelligence algorithms and produced by additive manufacturing, is a promising candidate. This study focuses on safe tactile interaction between humans and robots, exploiting soft-material characteristics to translate physical communication into auditory feedback. Embedded vibratory sensors that pick up touch stimuli transmitted through the soft material are presented. The soft module was developed and verified to react to three different patterns of human–robot contact, specifically users' touches, and then communicate the type of contact with sound. The study develops and validates a model that classifies different tactile gestures with machine learning algorithms for safe human–robot physical interaction. The system accurately recognizes both the gestures and the shapes of three-dimensional (3D) printed soft modules. The experiment uses three common gestures: slapping, squeezing, and tickling. The model builds on the idea that safe human–robot physical interaction can support cognitive and behavioral communication. In this context, the ability to measure, classify, and reflect the behavior of soft materials in robotic modules is a prerequisite for endowing additively manufactured robotic materials with safe human interaction.
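As a sketch of how such a gesture classifier might look, the snippet below extracts simple time-domain features from a vibration trace and assigns one of the three contact types with a nearest-centroid rule. The signal shapes, feature set, and classifier here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_features(signal):
    """Simple time-domain features of a 1-D vibration trace:
    peak amplitude, RMS energy, and zero-crossing rate."""
    peak = float(np.max(np.abs(signal)))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return np.array([peak, rms, zcr])

class NearestCentroidGestureClassifier:
    """Assigns a gesture label by the nearest class centroid in feature space."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
            for c in self.labels_
        }
        return self

    def predict(self, features):
        return min(self.labels_,
                   key=lambda c: np.linalg.norm(features - self.centroids_[c]))

# Synthetic traces standing in for the three contact types (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def slap():      # short, high-amplitude impact
    s = np.zeros(200)
    s[50:60] = 5.0
    return s + 0.05 * rng.standard_normal(200)

def squeeze():   # slow, sustained pressure
    return 2.0 * np.sin(np.pi * t) + 0.05 * rng.standard_normal(200)

def tickle():    # rapid, low-amplitude oscillation
    return 0.5 * np.sin(40 * np.pi * t) + 0.05 * rng.standard_normal(200)

X = [extract_features(g()) for g in (slap, squeeze, tickle) for _ in range(10)]
y = ["slap"] * 10 + ["squeeze"] * 10 + ["tickle"] * 10
clf = NearestCentroidGestureClassifier().fit(X, y)
```

Because the three contact types differ strongly in peak amplitude and oscillation rate, even this minimal feature space separates them cleanly.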
(1) Objective: To investigate the feasibility, safety, and effectiveness of a brain–computer interface (BCI) system with visual and motor feedback in limb and brain function rehabilitation after stroke. (2) Methods: First, we recruited three hemiplegic stroke patients to perform rehabilitation training using a BCI system with visual and motor feedback for two consecutive days (four sessions) to verify the feasibility and safety of the system. Then, we recruited five other hemiplegic stroke patients for rehabilitation training (6 days a week, for 12–14 days) using the same BCI system to verify its effectiveness. The mean and Cohen's w were used to compare changes in limb motor and brain functions before and after training. (3) Results: In the feasibility verification, the continuous motor state switching time (CMSST) of the three patients was 17.8 ± 21.0 s, and the motor state percentages (MSPs) in the upper and lower limb training were 52.6 ± 25.7% and 72.4 ± 24.0%, respectively. The effective training revolutions (ETRs) per minute were 25.8 ± 13.0 for the upper limb and 24.8 ± 6.4 for the lower limb. There were no adverse events during training. Compared with baseline, the motor function indices of the five patients improved, including sitting balance ability, upper limb Fugl-Meyer assessment (FMA), lower limb FMA, 6-min walking distance, modified Barthel index, and root mean square (RMS) value of the triceps surae, which increased by 0.4, 8.0, 5.4, 11.4, 7.0, and 0.9, respectively, all with large effect sizes (Cohen's w ≥ 0.5). The brain function indices of the five patients, namely the amplitudes of the motor evoked potentials (MEPs) on the non-lesion and lesion sides, increased by 3.6 and 3.7, respectively; the latency of the MEP on the non-lesion side was shortened by 2.6 ms, all with large effect sizes (Cohen's w ≥ 0.5).
(4) Conclusions: The BCI system with visual and motor feedback is applicable to active rehabilitation training of stroke patients with hemiplegia, and the pilot results show potential multidimensional benefits after a short course of treatment.
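To make the training metrics concrete, the sketch below computes a motor state percentage (MSP) and effective training revolutions (ETR) per minute from a per-second session log. The log schema is a hypothetical assumption; the paper does not publish its data format.

```python
from dataclasses import dataclass

@dataclass
class SessionSecond:
    """One second of training: whether the decoder judged the patient to be
    in a motor-intention state, and the effective revolutions counted."""
    motor_state: bool
    revolutions: float

def motor_state_percentage(log):
    """MSP: share of training time spent in the decoded motor state (%)."""
    return 100.0 * sum(s.motor_state for s in log) / len(log)

def effective_training_revolutions_per_min(log):
    """ETR: effective revolutions normalized to one minute of training."""
    return 60.0 * sum(s.revolutions for s in log) / len(log)

# Example: a 2-minute session, half spent in the motor state, with
# 30 effective revolutions overall -> MSP = 50.0 %, ETR = 15.0 per minute.
log = ([SessionSecond(True, 0.25)] * 60) + ([SessionSecond(False, 0.25)] * 60)
```

Normalizing both metrics by session length makes sessions of different durations directly comparable, which is what allows the mean ± SD summaries quoted above.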
Over the years, cultural heritage (CH) sites (e.g., museums) have increasingly focused on providing personalized services, with the main goal of adapting those services to visitors' personal traits, goals, and interests. In this work, we propose a computational cognitive model that gives an artificial agent (e.g., a robot or virtual assistant) the capability to personalize a museum visit to the goals and interests of the user who intends to visit, while taking into account the goals and interests of the museum curators who designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user's request, with the objective of accounting for needs that the user cannot assess, or has not been able to assess, alone. The computational model has been implemented using the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We report the results of a pilot study conducted to test the potential of the computational model. The experiment involved 26 participants who interacted with the humanoid robot Nao, widely used in Human-Robot Interaction (HRI) scenarios.
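As a toy illustration of the critical-help idea (not the JaCaMo implementation), the snippet below scores exhibits against both the user's stated interests and the curators' goals, and flags the resulting plan as critical help when it diverges substantially from a purely literal reading of the user's request. All names, weights, and thresholds are assumptions for illustration.

```python
def plan_visit(user_interests, curator_goals, exhibits, user_weight=0.6, k=3):
    """Rank exhibits by a blend of user interest and curator goals; report
    whether the blended plan substantially changes the literal request."""
    def user_score(ex):
        return sum(user_interests.get(t, 0.0) for t in ex["topics"])

    def blended_score(ex):
        curator = sum(curator_goals.get(t, 0.0) for t in ex["topics"])
        return user_weight * user_score(ex) + (1.0 - user_weight) * curator

    blended = sorted(exhibits, key=blended_score, reverse=True)[:k]
    literal = sorted(exhibits, key=user_score, reverse=True)[:k]
    overlap = {e["name"] for e in blended} & {e["name"] for e in literal}
    critical_help = len(overlap) < k   # plan departs from the literal request
    return [e["name"] for e in blended], critical_help

# Hypothetical exhibition: the user only asks for modern art, but the
# curators strongly promote a restoration exhibit the user did not consider.
exhibits = [
    {"name": "A", "topics": ["modern"]},
    {"name": "B", "topics": ["modern"]},
    {"name": "C", "topics": ["ancient"]},
    {"name": "D", "topics": ["ancient", "restoration"]},
]
plan, critical = plan_visit({"modern": 1.0}, {"restoration": 3.0}, exhibits, k=2)
```

In this example the blended plan promotes exhibit D over the literal modern-art picks, so the agent would surface the change to the user as critical help rather than silently complying.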
Minimally invasive surgery uses a smaller incision area than traditional open surgery, which greatly reduces damage to the human body and improves the utilization of medical devices. However, it also has drawbacks, such as limited flexibility and dexterity of operation. An interactive minimally invasive surgical robot system not only improves the stability, safety, and accuracy of minimally invasive surgery but also introduces force feedback into the control of the surgical robot, a new development direction in the field. This paper reviews the development status of interactive minimally invasive surgical robotic systems and the key technologies for achieving human-robot interaction, and closes with an outlook and summary of their development. Fuzzy theory and reinforcement learning are introduced into the parameter adjustment process of a variable guidance control model, and a human-robot interaction method for minimally invasive surgical robot posture adjustment is proposed.
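As a minimal sketch of the variable-guidance idea (not the paper's controller, and omitting the reinforcement-learning part), the snippet below uses a single fuzzy-style rule to blend between high and low damping as the measured interaction force grows, then derives a commanded velocity with an admittance law v = F / d. All ranges and gains are illustrative assumptions.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def fuzzy_damping(force_n, d_low=5.0, d_high=40.0):
    """Blend damping by a fuzzy-style membership on interaction force:
    a large, deliberate force -> low damping (easier guidance);
    a small force -> high damping (more stability). Ranges are illustrative."""
    mu_large = clamp((abs(force_n) - 2.0) / 8.0, 0.0, 1.0)  # ramps over 2..10 N
    return (1.0 - mu_large) * d_high + mu_large * d_low

def admittance_velocity(force_n):
    """Admittance law v = F / d with force-dependent damping."""
    return force_n / fuzzy_damping(force_n)
```

A learning component would then tune parameters such as the ramp range and damping bounds from interaction outcomes, which is the role the abstract assigns to reinforcement learning.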
The paper applies deep learning-based image visualization technology to extract, recognize, and analyze human skeleton movements and evaluates the resulting deep learning-based human-computer interaction (HCI) system, with dance education as the application domain. First, the Visual Geometry Group Network (VGGNet) is optimized using a Convolutional Neural Network (CNN), and the VGGNet extracts human skeleton movements from the OpenPose database. Second, a Long Short-Term Memory (LSTM) network is optimized to recognize the extracted skeleton movements. Finally, an HCI system for dance education is designed on top of these extraction and recognition methods. Results show a highest extraction accuracy of 96% and stable average recognition accuracy across different dance movements, verifying the effectiveness of the proposed model. The recognition accuracy of the optimized F-Multiple LSTMs rises to 88.9%, making it suitable for recognizing human skeleton movements. The interactive accuracy of the dance education HCI system built with deep learning-based visualization technology reaches 92%, with overall response times between 5.1 s and 5.9 s, so the proposed model offers good real-time performance. Deep learning-based image visualization technology therefore has enormous potential in human movement recognition, and combining deep learning with HCI plays a significant role.
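To make the recognition stage concrete, the sketch below runs a single-layer LSTM forward pass in plain NumPy over a sequence of skeleton frames (such as OpenPose keypoints), then projects the final hidden state to per-class scores. The dimensions, the random untrained weights, and the linear classifier head are illustrative assumptions, not the paper's trained F-Multiple LSTMs model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order along the first axis: input, forget, cell, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(0)
D, H, n_classes, T = 36, 32, 5, 60   # 18 keypoints x (x, y); 60 frames
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
W_out = 0.1 * rng.standard_normal((n_classes, H))

frames = rng.standard_normal((T, D))  # stand-in for one OpenPose sequence
h, c = np.zeros(H), np.zeros(H)
for x in frames:
    h, c = lstm_step(x, h, c, W, U, b)
scores = W_out @ h                    # one score per dance-movement class
```

Recognition then amounts to taking the argmax of `scores`; training would fit `W`, `U`, `b`, and `W_out` on labeled movement sequences.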